
    A Formal Framework for Concrete Reputation Systems

    In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component in security decision-making, many existing reputation-based trust-management systems provide no formal security guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents’ past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding, e.g., Chinese Wall policies), we show how one can extend it with quantification and parameterized events. This allows us to encode other policies known from the literature, e.g., ‘one-out-of-k’. The problem of checking a history with respect to a policy is efficient for the basic language, and tractable for the quantified language when policies do not have too many variables.
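
    As an illustration of the kind of policy the abstract describes, here is a minimal sketch (in Python, not the paper's own policy language) of checking a simple pure-past requirement over an agent's recorded history; the event names 'paid' and 'defaulted' are hypothetical.

        # Sketch only: a policy requiring that the agent has never produced a
        # 'defaulted' event and has produced at least one 'paid' event.
        def never(event, history):
            return all(e != event for e in history)

        def once(event, history):
            return any(e == event for e in history)

        history = ["registered", "paid", "delivered", "paid"]
        allow_interaction = never("defaulted", history) and once("paid", history)
        print(allow_interaction)  # True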

    A Logical Framework for Reputation Systems

    Reputation systems are meta systems that record, aggregate and distribute information about the past behaviour of principals in an application. Typically, these applications are large-scale open distributed systems where principals are virtually anonymous, and (a priori) have no knowledge about the trustworthiness of each other. Reputation systems serve two primary purposes: helping principals decide whom to trust, and providing an incentive for principals to behave well. A logical policy-based framework for reputation systems is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied for dynamic history-based access control for safe execution of unknown and untrusted programs.
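
    The abstract highlights efficient dynamic algorithms for checking behaviour against a policy. A minimal sketch of that idea, under the assumption that a pure-past property can be maintained incrementally as events arrive (the event names are illustrative, and this is not the paper's actual algorithm):

        # Maintain the truth of "a 'grant' has occurred with no 'revoke' since"
        # in O(1) work per observed event.
        class SinceChecker:
            def __init__(self):
                self.holds = False

            def observe(self, event):
                if event == "revoke":
                    self.holds = False
                elif event == "grant":
                    self.holds = True
                return self.holds

        checker = SinceChecker()
        for e in ["grant", "use", "revoke", "grant"]:
            print(e, checker.observe(e))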

    Caching and Auditing in the RPPM Model

    Crampton and Sellwood recently introduced a variant of relationship-based access control based on the concepts of relationships, paths and principal matching, to which we will refer as the RPPM model. In this paper, we show that the RPPM model can be extended to provide support for caching of authorization decisions and enforcement of separation of duty policies. We show that these extensions are natural and powerful. Indeed, caching provides far greater advantages in RPPM than it does in most other access control models, and we are able to support a wide range of separation of duty policies. Comment: Accepted for publication at STM 2014 (without proofs, which are included in this longer version).
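
    To make the caching idea concrete, a minimal sketch (illustrative only, not the paper's algorithms): authorization decisions are cached per (requester, target, action), and the cache is invalidated when the relationship graph changes, since cached path-matching results may no longer hold. All names here are assumptions for illustration.

        class DecisionCache:
            def __init__(self):
                self._cache = {}

            def get(self, requester, target, action):
                return self._cache.get((requester, target, action))

            def put(self, requester, target, action, decision):
                self._cache[(requester, target, action)] = decision

            def invalidate(self):
                # Called whenever a relationship edge is added or removed.
                self._cache.clear()

        cache = DecisionCache()
        cache.put("alice", "record42", "read", "allow")
        print(cache.get("alice", "record42", "read"))  # allow
        cache.invalidate()
        print(cache.get("alice", "record42", "read"))  # None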

    A Bayesian model for event-based trust

    The application scenarios envisioned for ‘global ubiquitous computing’ have unique requirements that are often incompatible with traditional security paradigms. One alternative currently being investigated is to support security decision-making by explicit representation of principals’ trusting relationships, i.e., via systems for computational trust. We focus here on systems where trust in a computational entity is interpreted as the expectation of certain future behaviour based on behavioural patterns of the past, and concern ourselves with the foundations of such probabilistic systems. In particular, we aim at establishing formal probabilistic models for computational trust and their fundamental properties. In the paper, we define a mathematical measure for quantitatively comparing the effectiveness of probabilistic computational trust systems in various environments. Using it, we compare some of the systems from the computational trust literature; the comparison is derived formally, rather than obtained via experimental simulation as traditionally done. With this foundation in place, we formalise a general notion of information about past behaviour, based on event structures. This yields a flexible trust model where the probability of complex protocol outcomes can be assessed.
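
    The abstract does not spell out the concrete probabilistic model, but a common Bayesian treatment of event-based trust (offered here only as a hedged sketch, not necessarily the paper's exact model) takes trust in a principal to be the expected probability of good behaviour under a Beta(1, 1) prior updated with observed outcomes:

        # With s positive and f negative past interactions, the posterior
        # expectation of the probability of good behaviour is (s+1)/(s+f+2).
        def expected_trust(successes, failures):
            return (successes + 1) / (successes + failures + 2)

        print(expected_trust(8, 2))  # 0.75 - mostly good past behaviour
        print(expected_trust(0, 0))  # 0.5  - no evidence, prior expectation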

    Trust in Crowds: probabilistic behaviour in anonymity protocols

    The existing analysis of the Crowds anonymity protocol assumes that a participating member is either ‘honest’ or ‘corrupted’. This paper generalises this analysis so that each member is assumed to maliciously disclose the identity of other nodes with a probability determined by her vulnerability to corruption. Within this model, the trust in a principal is defined to be the probability that she behaves honestly. We investigate the effect of such probabilistic behaviour on the anonymity of the principals participating in the protocol, and formulate the necessary conditions to achieve ‘probable innocence’. Using these conditions, we propose a generalised Crowds-Trust protocol which uses trust information to achieve ‘probable innocence’ for principals exhibiting probabilistic behaviour.
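
    For reference, the baseline that this paper generalises is Reiter and Rubin's original probable-innocence condition for Crowds, in which members are simply honest or corrupt. A small sketch of that standard condition follows (the generalised, trust-weighted conditions of the paper are not reproduced here):

        # With n members, c of them corrupt, and forwarding probability p_f > 1/2,
        # probable innocence holds when n >= p_f / (p_f - 1/2) * (c + 1).
        def probable_innocence(n, c, p_f):
            assert p_f > 0.5
            return n >= p_f / (p_f - 0.5) * (c + 1)

        print(probable_innocence(n=20, c=3, p_f=0.75))  # True
        print(probable_innocence(n=10, c=3, p_f=0.75))  # False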

    A Celebration of The Journals of the Lewis and Clark Expedition

    Remarks at a reception honoring Gary Moulton for the completion of the 13-volume edition, the publication of the 10-volume paperback edition, the publication of the one-volume compilation, and the inauguration of the online pilot project and website http://lewisandclarkjournals.unl.edu; at the Center for Great Plains Studies, University of Nebraska-Lincoln, February 28, 2003. Remarks include a publication history of the scholarly edition, 1983-2003, and its importance to the fields of Western history, American literature, Native American studies, geography, and the literature of discovery and exploration. Topics include funding, outreach, honors, participants, and the impact on the scholarly world and on the local economy.

    From Access Control to Trust Management, and Back – A Petition

    In security, services are too often understood not from first principles but via the characteristic mechanisms used for their delivery. Access control had become tied up with DAC, MAC, RBAC and reference monitors. With developments in distributed systems security, and with the opening of the Internet for commercial use, new classes of access control mechanisms became relevant that did not fit into the established mold. Trust Management was coined as a term unifying the discussion of those mechanisms. We view trust as a placeholder that had its use in driving this research agenda, but argue that trust is now so overloaded that it has become an impediment to further progress. Our petition asks for a return to access control and proposes a new framework for structuring investigations in this area.

    Trust in Anonymity Networks

    Anonymity is a security property of paramount importance as we move steadily towards a wired, online community. Its import touches upon subjects as different as eGovernance, eBusiness and eLeisure, as well as personal freedom of speech in authoritarian societies. Trust metrics are used in anonymity networks to support and enhance reliability in the absence of verifiable identities, and a variety of security attacks currently focus on degrading a user's trustworthiness in the eyes of the other users. In this paper, we analyse the privacy guarantees of the Crowds anonymity protocol, with and without onion forwarding, for standard and adaptive attacks against the trust level of honest users.